In the EFF's "Platform Censorship: Lessons From the Copyright Wars", Corynne McSherry makes several points:
https://blog.ethereum.world/community-moderation-myths-goals/
- A team of central moderators that just can't keep up with the volume of interactions requiring their attention
- The value of engaging in moderating processes is considered insufficient
- Moderating processes are perceived as unfair
- Those doing the moderating cannot relate to the context in question
- Moderating processes are too binary (e.g. expulsion is the only punishment available)
Two Concepts of Liberty and Infinite Permutations of Moderating: "We need to figure out together how best to accommodate multiple contexts and naturally varying expectations; how much variation can one 'world' satisfy, and when might a new 'world' be necessitated?"
"The Law of Requisite Variety asserts that a system's control mechanism (i.e. the governing, specifically the moderating in the context here) must be capable of exhibiting more states than the system itself. Failure to engineer for this sets the system up to fail." Philip Sheldrake ref "Boisot and McKelvey updated this law to the ‘Law of Requisite Complexity’, that holds that, in order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts. A further practical application of this law is the view that information systems (IS) alignment is a continuous coevolutionary process that reconciles top-down ‘rational designs’ and bottom-up ‘emergent processes’ of consciously and coherently interrelating all components of the Business/IS relationships in order to contribute to an organization’s performance over time" https://en.wikipedia.org/wiki/Variety_(cybernetics)#Law_of_requisite_variety
https://www.cleanuptheinternet.org.uk/ - a simple proposal to limit harassment online in three steps.
https://gitlab.com/spritely/ocappub "“networks of consent”: explicit and intentional connections between different users and entities on the network. The idea of “networks of consent” is then implemented on top of a security paradigm called “object capabilities”, which as we will see can be neatly mapped on top of the actor model, on which ActivityPub is based."
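The "networks of consent" idea can be made concrete with a small object-capability sketch: an actor can only deliver to another actor's inbox if that actor has handed over a reference to it, so consent and reachability become the same thing. This is an illustrative Python sketch under that assumption; the class and method names are hypothetical, not the ocappub or ActivityPub API.

```python
# Minimal object-capability sketch: holding a reference to an inbox IS the
# permission to post to it. No global ACL or identity check is consulted.

class Inbox:
    def __init__(self):
        self.messages = []

    def deliver(self, msg):
        self.messages.append(msg)


class Actor:
    def __init__(self, name):
        self.name = name
        self._inbox = Inbox()        # kept private: not globally reachable
        self._capabilities = {}      # inbox references this actor has been granted

    def consent_to(self, other):
        """Grant `other` the capability to message us by handing over our inbox reference."""
        other._capabilities[self.name] = self._inbox

    def send(self, recipient_name, msg):
        inbox = self._capabilities.get(recipient_name)
        if inbox is None:
            raise PermissionError(f"{self.name} holds no capability for {recipient_name}")
        inbox.deliver(f"{self.name}: {msg}")


alice, bob, spammer = Actor("alice"), Actor("bob"), Actor("spammer")
alice.consent_to(bob)            # an explicit, intentional connection
bob.send("alice", "hi!")         # works: bob holds the capability

try:
    spammer.send("alice", "buy now")     # no consent was ever given
except PermissionError as e:
    print(e)                             # spammer holds no capability for alice
```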
https://www.cblgh.org/dl/trustnet-cblgh.pdf - Alexander Cobleigh, Lund University, Department of Automatic Control - a thesis exploring how to handle moderation in a decentralised context.
"Appleseed [Ziegler and Lausen, 2005] is a trust propagation algorithm and trust metric for local group trust computation. Basically, Appleseed makes it possible to take a group of nodes—which have various trust relations to each other—look at the group from the perspective of a single node, and rank each of the other nodes according to how trusted they are from the perspective of the single node." (ref- the TrustNet essay). Paper: https://www.researchgate.net/publication/2935185_Spreading_Activation_Models_for_Trust_Propagation
One of the oldest and most trusted models for establishing trust in published work. Some proposed innovations around it:
Many decentralised systems, such as [[Holochain]], offer a deny list of public keys, IP addresses or user accounts which should be avoided and whose activities shouldn't be seeded/shared. By default, users follow the deny list of the central administrator, but they can set their own overrides, or even subscribe to another user's watch/deny lists.
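A sketch of that layering, under the assumption that a deny list is simply a set of identifiers: the user's effective list starts from the administrator's default, optionally merges lists they subscribe to, and is then adjusted by their own overrides. The function and names are hypothetical, not any particular system's API.

```python
# Hypothetical deny-list layering: default list + subscriptions + personal overrides.

def effective_deny_list(admin_list, subscribed_lists=(), deny_overrides=(), allow_overrides=()):
    """Return the set of identifiers (keys/IPs/accounts) this user will not seed or share."""
    denied = set(admin_list)             # follow the administrator's list by default
    for sub in subscribed_lists:         # optionally adopt other users' deny lists
        denied |= set(sub)
    denied |= set(deny_overrides)        # the user's own additions
    denied -= set(allow_overrides)       # the user's own exceptions win over everything
    return denied


admin = {"badkey1", "badkey2"}
a_friends_list = {"badkey3"}
mine = effective_deny_list(admin,
                           subscribed_lists=[a_friends_list],
                           deny_overrides={"badkey4"},
                           allow_overrides={"badkey2"})
print(mine)   # {'badkey1', 'badkey3', 'badkey4'} - badkey2 is allowed by the personal override
```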